Multi-modal Aggregation for Video Classification
Abstract
In this paper, we present a solution to the Large-Scale Video Classification Challenge (LSVC2017) [1] that ranked first. We focus on a variety of modalities covering visual, motion, and audio information, and we visualize the aggregation process to better understand how each modality takes effect. Among the extracted modalities, we found the temporal-spatial features computed by 3D convolution particularly promising: they greatly improved performance. Our ensemble model attained an official-metric mAP of 0.8741 on the testing set.
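The aggregation of per-modality features described in the abstract can be illustrated with a minimal late-fusion sketch. This is not the authors' implementation; the feature dimensions, the untrained linear classifier, and all variable names below are assumptions made for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-clip descriptors for each modality (dimensions assumed).
visual = rng.standard_normal(2048)   # e.g. 2D-CNN appearance features
motion = rng.standard_normal(1024)   # e.g. 3D-convolution temporal-spatial features
audio = rng.standard_normal(128)     # e.g. audio spectrogram features

# Late fusion by concatenation into one multi-modal descriptor.
fused = np.concatenate([visual, motion, audio])

# A linear classifier over the fused vector (random, untrained weights).
num_classes = 500
W = rng.standard_normal((num_classes, fused.size)) * 0.01
logits = W @ fused
probs = np.exp(logits - logits.max())
probs /= probs.sum()

print(fused.shape)  # (3200,)
```

In practice each modality's feature extractor would be trained separately and the fused classifier trained on the challenge labels; concatenation is only the simplest of the aggregation schemes the paper's title refers to.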
Similar Resources
Spatiotemporal Networks for Video Emotion Recognition
This article presents an audio-visual multi-modal emotion classification system. Considering that deep learning approaches to facial analysis have recently demonstrated high performance, we use convolutional neural networks (CNNs) for emotion recognition in video, relying on temporal averaging and pooling operations reminiscent of widely used approaches for the spatial ...
Multi-modal analysis for person type classification in news video
Classifying the identities of people appearing in broadcast news video as anchor, reporter, or news subject is an important topic in high-level video analysis. Given the visual resemblance of different types of people, this work explores multi-modal features derived from a variety of evidence, such as speech identity, transcript clues, temporal video structure, and named entities, and uses a...
Multi-modal fusion for flasher detection in a mobile video chat application
This paper investigates the development of accurate and efficient classifiers to identify misbehaving users (i.e., "flashers") in a mobile video chat application. Our analysis is based on video session data collected from a mobile client we built that connects to a popular random video chat service. We show that prior image-based classifiers designed for identifying normal and misbehaving u...
Multimedia Evidence Fusion for Video Concept Detection via OWA Operator
We present a novel multi-modal evidence fusion method for high-level feature (HLF) detection in videos. Uni-modal features, such as color histograms and transcript texts, tend to capture different aspects of HLFs and hence exhibit both complementarity and redundancy in modeling the contents of such HLFs. We argue that such inter-relations are key to effective multi-modal fusion. Here, we formulat...
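The OWA (ordered weighted averaging) operator named in this snippet's title aggregates uni-modal scores by weighting them according to their rank rather than their source. A minimal sketch, with the modality scores and weight vector chosen arbitrarily for illustration:

```python
import numpy as np

def owa(scores, weights):
    """Ordered Weighted Averaging: weights apply to ranked scores, not sources.

    `weights` should sum to 1; weights[0] multiplies the largest score.
    """
    ordered = np.sort(np.asarray(scores, dtype=float))[::-1]  # descending
    return float(np.dot(ordered, weights))

# Detector confidences from three modalities for one concept (illustrative).
scores = [0.2, 0.9, 0.6]
weights = [0.5, 0.3, 0.2]  # emphasizes the most confident modality

print(owa(scores, weights))  # 0.5*0.9 + 0.3*0.6 + 0.2*0.2 ≈ 0.67
```

Because the weights attach to rank positions, OWA can interpolate between max (weights = [1, 0, 0]), mean (uniform weights), and min fusion of the modality scores.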
متن کاملHarmonium Models for Semantic Video Representation and Classification
Accurate and efficient video classification demands the fusion of multimodal information and the use of intermediate representations. Combining the two ideas into the same framework, we propose a probabilistic approach for video classification using intermediate semantic representations derived from the multi-modal features. Based on a class of bipartite undirected graphical models named harmon...
Journal: CoRR
Volume: abs/1710.10330
Publication year: 2017